SECURE CONFIGURATION, PERFORMANCE EVALUATION, AND AUDIT OF A HEADLESS LINUX SERVER
Student Name: Smarika Singh Thanuri
Student ID: A00025647
Journal Structure & Navigation
Week 1 – System Planning and Distribution Selection
Distribution Selection Justification
Workstation Selection Justification
Network Configuration Documentation
Week 2 – Security Planning and Testing Methodology
Performance Testing Plan and Remote Monitoring Methodology
Security Configuration Checklist
Threat Model and Mitigation Strategies
Week 3 – Application Selection for Performance Testing
Application Selection Rationale
Installation Documentation (SSH-Based)
Week 4 – Initial System Configuration & Core Security
User Management and Privilege Control
Firewall Configuration and Network Access Control
Remote Administration Evidence
Week 5 – Advanced Security and Monitoring Infrastructure
Mandatory Access Control with AppArmor
Intrusion Detection with fail2ban
Security Baseline Verification Script
Remote Monitoring Infrastructure
Week 6 – Performance Evaluation & Optimisation
Week 7 – Security Audit & System Evaluation
Network Security Assessment (Nmap)
Final Critical Reflection & Conclusion
Security vs Performance Trade-offs
Professional & Industry Relevance
This coursework project concentrates on the systematic configuration, securing, monitoring, and assessment of a Linux-based operating system environment. Its fundamental objective is to develop applied competence in operating a headless Linux server and putting core operating system concepts into practice in the areas of security, performance, and reliability. All configuration and management activities were carried out with command-line utilities rather than graphical tools, mirroring real-world server administration.
The project stresses correct operating system configuration as the foundation of security and performance. Weak access control, misconfigured services, and unmonitored resource usage can greatly increase operational risk and system vulnerability. By incrementally implementing security measures, monitoring systems, and performance optimisations, the project illustrates how a properly controlled operating system facilitates stable and secure service delivery.
The test system is built on a dual-system infrastructure comprising a dedicated server and a separate workstation. The server runs Ubuntu Server and hosts all the core services: SSH, firewall controls, monitoring software, and a web service. It is a minimal installation without a graphical interface, which reduces both the attack surface and resource overhead [1].
The workstation system runs Ubuntu Desktop and serves as the administration and testing platform. All remote management, performance testing, security auditing, and network analysis are carried out from this machine over SSH.
The project journal is structured into a systematic seven-week format, with each week covering a particular phase of system development and evaluation. The early weeks deal with installation, base configuration, and fundamental security measures, whereas the later weeks cover advanced monitoring, performance testing, optimisation, and security auditing. Each part contains command outputs, screenshots, and configuration evidence, followed by a reflective analysis of why decisions were made and how the observed results arose.
Week 1 was aimed at designing and implementing a basic operating system environment for secure remote administration and subsequent performance assessment. This stage implemented the dual-system architecture, comprising a headless Linux server and a separate workstation used exclusively for administration. Emphasis was placed on command-line interaction, network isolation, and system transparency, aligning the environment with professional Linux server administration practice and the evaluation requirements.
A dual-system structure was adopted to build command-line proficiency and reflect remote administration practice. The environment comprises an Ubuntu Server virtual machine (the target system) and an Ubuntu Desktop virtual machine (the administrative workstation). The server has no graphical interface and can only be accessed from the workstation via Secure Shell (SSH).
Figure 1: System Architecture Diagram
(Source: Self-created)
Both systems are linked by a VirtualBox host-only network, which confines communication to an isolated private subnet. This design ensures that the systems are not exposed to external networks while still giving the administrator secure access.
Ubuntu Server Long Term Support (LTS) was chosen for its stability, long-term security support, and widespread use in enterprise and cloud environments. The LTS release model offers five years of security patches, which is appropriate when the server needs to remain deployed over a long period [2].
Figure 2: Ubuntu Server IP Address
(Source: ubuntu server)
Compared with alternatives such as Debian and Rocky Linux, Ubuntu Server offers a more predictable update cadence, stronger community support, and comprehensive official documentation. Debian is stable but tends to lag behind in software versions, whereas Rocky Linux is enterprise-oriented and may require extra configuration overhead.
Ubuntu Desktop was selected for the administrative workstation to maintain operating system continuity with the server environment. Because both systems belong to the same Linux family, package management, scripting, and command-line use are consistent across them. Ubuntu Desktop provides native SSH client tools, monitoring utilities, and scripting environments that need no further configuration [3].
Figure 3: Ubuntu Desktop IP Address
(Source: ubuntu desktop)
The workstation's graphical interface facilitates documentation work, including screenshot capture, performance visualisation, and journal writing, but all server administration remains strictly command-line based.
A VirtualBox host-only adapter was used to configure the network, providing an isolated private network between the server and the workstation. This approach allows safe communication without exposing either virtual machine to external networks or the internet. IP addresses were allocated within the same subnet, enabling reliable SSH access and performance testing [4].
Figure 4: Successful SSH connectivity between workstation and server
(Source: ubuntu desktop)
This structure follows best practice for a laboratory-based security testing environment, where isolation reduces risk and eliminates unwanted network interactions. The network configuration was confirmed operational through successful ICMP communication and SSH connectivity.
Standard Linux command-line utilities were used to gather the system specifications remotely over SSH. The kernel version and architecture were identified with the uname -a command, and information about available and used memory was acquired with the free -h command.
Figure 5: System Specification Evidence
(Source: ubuntu server)
df -h was used to analyse disk capacity and utilisation, and ip addr to determine network interface details. The operating system version and release details were established with lsb_release -a. These commands provided baseline knowledge of the server's hardware and software environment, to be used later in performance analysis and optimisation.
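A minimal sketch of this specification-gathering session, with the `ssh` wrapper omitted for brevity (the server's address is not reproduced here):

```shell
# Baseline system-specification commands, as run on the server over SSH.
specs=$(
    uname -a     # kernel release and architecture
    free -h      # total/used/available memory (procps `free`)
    df -h /      # capacity and utilisation of the root filesystem
)
printf '%s\n' "$specs"
# `ip addr` (interface details) and `lsb_release -a` (OS release) were
# also used; they are omitted here for portability.
```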
Week 2 was aimed at developing a detailed security baseline and a formal approach to performance testing before system configuration. This stage centred on proactive planning rather than implementation, so that security controls, monitoring plans, and test procedures were well defined before being applied to the server. This approach reflects professional system administration practice, in which design decisions are assessed in advance to reduce misconfiguration and security risk.
Performance testing will follow the principle of remote monitoring: all measurements will be captured on the Ubuntu Server through Secure Shell (SSH) connections initiated from the Ubuntu Desktop workstation. This methodology keeps the server headless and makes all observations faithful to a real-life remote administration situation.
Testing will begin with baseline measurements under idle conditions to establish the system's normal behaviour. Performance indicators covering CPU utilisation, memory consumption, disk usage, and network activity will be documented with standard Linux command-line utilities [5]. These preliminary results will serve as control points for later comparison.
Controlled workload generation will follow, with tests that mimic various operational conditions: CPU-intensive, memory-intensive, disk I/O-intensive, and network-intensive loads. Performance data will be gathered after every test to identify resource bottlenecks and performance degradation.
A security configuration checklist has been designed to specify the mandatory controls to be implemented in subsequent stages of the coursework. All controls are chosen to mitigate common Linux server security threats without compromising usability or performance.
SSH Hardening
Secure Shell will be hardened by disabling password-based authentication, enforcing key-based authentication, and disabling direct root login. These measures minimise exposure to brute-force attacks and unauthorised access [6].
Firewall Configuration
A host-based firewall will be configured to allow inbound SSH traffic only from the IP address of the designated workstation. All other unsolicited connections will be rejected, following the principle of least exposure.
Mandatory Access Control
AppArmor will be used to enforce Mandatory Access Control, restricting application behaviour and limiting the impact of compromised services [7]. Enforced profiles constrain file access and process rights in addition to conventional discretionary access controls.
Automatic Security Updates
The automatic update option will be activated so that security patches are applied in a timely manner. This minimises administrative overhead and the risk of unpatched vulnerabilities.
User Privilege Management
System management will use a non-root administrative user, with elevated privileges granted only when needed via sudo. This applies the principle of least privilege and minimises the potential impact of compromised privileged credentials.
Network Security
Networking will be restricted to a host-only VirtualBox network so that the systems are not exposed to other networks. This isolation supports safe security testing and ensures controlled communication between the two systems.
A threat model has been created to identify security threats applicable to the deployed Linux server and to establish suitable mitigation strategies.
Threat 1: Brute-Force SSH Attacks
Attackers may attempt to gain unauthorised access by making repeated login attempts against the SSH service.
Mitigation: To reduce this risk, key-based authentication, disabled password logins, SSH access restricted by IP address, and an intrusion detection tool such as Fail2Ban will be used.
Threat 2: Privilege Escalation
A compromised user account might seek higher privileges and gain complete control of the system.
Mitigation: Privilege escalation is constrained by using non-root administrative accounts, limited sudo access, and Mandatory Access Control policies.
Threat 3: Unpatched Software Vulnerabilities
Outdated system packages may contain known vulnerabilities that can be exploited remotely or locally.
Mitigation: Automatic security updates and frequent system audits will ensure that vulnerabilities are patched in a timely manner.
The threat modelling exercise informed the choice of security controls and ensured that the proposed mitigations respond directly to realistic risks in the system's operating environment.
Week 3 was aimed at choosing and preparing a collection of applications representing diverse workload characteristics for later performance evaluation. The aim of this phase was to determine which tools could load specific resources of the operating system in a controlled way, install them on the headless Ubuntu Server over SSH, and define their expected behaviour and monitoring guidelines. No performance benchmarking was done at this stage, maintaining a clear separation between the preparation and evaluation periods.
To test operating system behaviour under a wide range of conditions, the applications were chosen to reflect CPU-intensive, memory-intensive, disk I/O-intensive, network-intensive, and server-based workloads. The chosen tools are lightweight, commonly used in both academic and professional settings, and appropriate for deployment on a headless Linux server [9].
The stress utility was chosen to create controlled CPU and memory loads so that scheduling behaviour and memory management could be observed. Disk I/O behaviour will be exercised with native Linux tools that access the file system directly. Network performance testing is supported by iperf3, a standard throughput and latency measurement tool.
| Application | Workload Type | Primary Resource | Justification |
| --- | --- | --- | --- |
| stress | CPU-intensive | CPU scheduler | Generates controlled CPU load to evaluate scheduling and utilisation |
| stress | Memory-intensive | RAM and swap | Simulates memory pressure and allocation behaviour |
| dd | Disk I/O–intensive | Disk subsystem | Enables measurement of read/write throughput and I/O wait |
| iperf3 | Network-intensive | Network stack | Measures bandwidth, latency, and packet handling |
| nginx | Server application | CPU and network | Represents a realistic production server workload |
This combination ensures that all key system resources are exercised in the later testing stages.
All applications were installed on the Ubuntu Server over Secure Shell (SSH) from the Ubuntu Desktop workstation, upholding the required remote administration model. Because the server sits on a host-only network, both virtual machines were temporarily switched to NAT networking so they could reach external package repositories [10]. After installation was verified, the systems were returned to the host-only setup to re-establish isolation and keep remote administration secure.
Figure 6: Installed performance testing and monitoring tools
(Source: ubuntu Desktop)
Installation was performed with the system package manager, and successful deployment was confirmed by checking each application's version output.
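Assuming the tools from the selection table, the installation session over SSH would look like the following sketch (run while NAT networking was temporarily enabled):

```shell
# Install the workload-generation and service applications.
sudo apt update
sudo apt install -y stress iperf3 nginx

# Verify deployment by checking each application's version output.
stress --version
iperf3 --version
nginx -v
```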
To establish a theoretical basis against which measured results could be compared, the expected behaviour of every application was identified before testing.
CPU-intensive workload (stress)
Full utilisation of the stressed CPU cores is anticipated. Increased context switching and scheduler activity are expected, with little disk or network activity [11].
Memory-intensive workload (stress)
Considerable RAM usage will be incurred, which may provoke swap usage under constrained conditions. This load tests the efficiency of memory allocation and the system's capacity under pressure.
Disk I/O–intensive workload (dd)
High disk read/write throughput and increased I/O wait times are expected. CPU usage should be low in comparison with the I/O activity.
Network-intensive workload (iperf3)
High network bandwidth utilisation is anticipated, along with increased packet processing and interrupt handling. Disk activity should remain minimal.
Server workload (nginx)
Moderate CPU and network use is expected, reflecting an actual service deployment scenario. This load combines several resource demands and represents general server behaviour [12].
The monitoring strategy was specified so that performance measurement would be consistent and repeatable in subsequent testing stages. Metrics will be collected remotely over SSH using standard Linux command-line tools.
| Metric | Tool | Measurement Purpose |
| --- | --- | --- |
| CPU utilisation | top / htop | Identify CPU saturation and scheduling behaviour |
| Memory usage | free -h | Monitor RAM and swap consumption |
| Disk usage and I/O | df -h, iostat | Detect storage bottlenecks and I/O wait |
| Network performance | iperf3, ip -s link | Measure throughput, packet counts, and errors |
| Service status | systemctl status | Observe server application behaviour |
This approach imposes minimal overhead while giving adequate visibility into system performance.
Phase 3 laid the groundwork for performance evaluation by choosing appropriate workload-generating applications, deploying them through secure remote administration, and specifying expected behaviour and monitoring strategies. Postponing execution and benchmarking to subsequent stages guarantees methodological transparency and facilitates proper analysis of operating system behaviour under controlled workloads.
Week 4 was aimed at bringing the Ubuntu Server into a secure operational state by applying basic system security controls. This stage concerned enforcing least-privilege access, hardening remote administration, and minimising the network attack surface without impairing secure and reliable remote management.
A dedicated non-root administrative account was created on the server to implement the principle of least privilege. This user was granted controlled administrative rights through membership of the sudo group, so that privilege escalation occurs only on an as-needed basis [13].
Figure 7: User Management and Privilege Control
(Source: ubuntu Desktop)
Direct root access was avoided because of the elevated risk of credential compromise and unintentional system-wide modifications [14]. Administrative functions were performed with sudo, providing accountability and controlled privilege elevation.
Figure 8: Verification was conducted by logging in as the newly created administrative user
(Source: ubuntu Desktop)
Role separation and access control were verified by logging in as the newly created administrative user and running privileged commands with sudo.
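The user-management steps above can be sketched as follows; `adminuser` is a placeholder name, not the actual account used in the project:

```shell
# Create the non-root administrative account and add it to the sudo group.
sudo adduser adminuser
sudo usermod -aG sudo adminuser

# Verify role separation: list group membership, then confirm controlled
# privilege escalation from the new account.
groups adminuser
su - adminuser -c 'sudo whoami'   # prints "root" only after sudo authentication
```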
Secure remote administration was established through SSH key authentication. A cryptographic key pair was generated on the workstation and its public key deployed to the server in a secure manner. This method avoids password-based authentication, which is susceptible to brute-force and credential-reuse attacks [15].
Figure 9: SSH key-based authentication configuration
(Source: ubuntu Desktop)
After key deployment, SSH access was confirmed and authentication succeeded without password prompts. This ensured that the server could be accessed only by clients holding the correct private key.
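A hedged sketch of the key deployment from the workstation; the key type and the server address 192.168.56.10 are illustrative assumptions:

```shell
# On the workstation: generate a key pair (ed25519 chosen as a modern default).
ssh-keygen -t ed25519 -C "workstation-admin"

# Copy the public key into the server account's authorized_keys file.
ssh-copy-id adminuser@192.168.56.10

# Subsequent logins should authenticate with the key, with no password prompt.
ssh adminuser@192.168.56.10
```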
The SSH daemon was further hardened by restricting its authentication and access behaviour. Before changing the configuration file, a backup copy was saved for later comparison and recovery if needed.
Figure 10: SSH Hardening Configuration
(Source: ubuntu Desktop)
The critical security modifications were disabling direct root logins, disabling password-based authentication, and explicitly enabling public-key authentication [16]. These measures significantly reduce the attack surface by blocking unauthorised access attempts and enforcing strong authentication mechanisms.
Figure 11: SSH service was safely restarted
(Source: ubuntu Desktop)
Once the changes had been applied, the SSH service was restarted safely and its status confirmed. Connectivity from the workstation was then retested to ensure that secure access remained available after hardening.
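The hardening corresponds to directives of the following form in /etc/ssh/sshd_config; this is a sketch of the intended settings, not a copy of the project's file:

```shell
# Back up the configuration before editing.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Key directives set in /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
#   PubkeyAuthentication yes

# Validate the syntax, then restart the daemon safely.
sudo sshd -t && sudo systemctl restart ssh
```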
A host-based firewall was implemented with Uncomplicated Firewall (UFW) to enforce strict rules on incoming traffic. The default policy rejects all incoming connections and allows outbound traffic, so that only explicitly authorised services can be reached.
Figure 12: Firewall rules permitting restricted SSH access
(Source: ubuntu Desktop)
SSH access was limited to the single IP address of the trusted workstation. This ensured that even if SSH credentials were compromised, the server could only be reached from the authorised administrative system [17]. The firewall was enabled and the full rule set checked to confirm proper enforcement.
Figure 13: Successful SSH login from authorised workstation
(Source: ubuntu Desktop)
Post-configuration testing showed that SSH access remained functional from the authorised workstation while all other inbound access was rejected.
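The firewall policy can be expressed with the following UFW commands; the workstation address 192.168.56.101 is a placeholder for the host-only subnet:

```shell
# Default-deny inbound, allow outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Permit SSH only from the trusted workstation's IP address.
sudo ufw allow from 192.168.56.101 to any port 22 proto tcp

# Enable the firewall and verify the rule set.
sudo ufw enable
sudo ufw status verbose
```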
All system administration activities at this stage were performed remotely over SSH from the workstation. Remote access comprised authenticated SSH connections, execution of administrative commands, and controlled privilege escalation through sudo [18].
Figure 13: Remote administration commands executed via SSH
(Source: ubuntu Desktop)
This confirms that the server can be securely controlled without physical or console access, matching real-world server administration practice.
The security measures put in place during this stage substantially strengthened the server's baseline security configuration. With key-based SSH authentication, tight privilege control, and host-based firewall rules, exposure to common attack vectors such as brute-force logins and unauthorised network access was significantly reduced [19].
Week 4 successfully moved the Ubuntu Server from a default, unsecured state to a securely hardened system ready for further administration and analysis. Core security principles such as least privilege, defence in depth, and secure remote access were implemented practically and measurably. The server is now ready for systematic monitoring and baseline performance examination in the following step.
Week 5 was devoted to enhancing the server's security posture by introducing advanced security controls and building remote monitoring capabilities. Building on the security settings introduced in previous phases, this week added mandatory access control, automated patch management, intrusion detection, automated security verification, and remote performance monitoring.
To enforce application-level access restrictions, the AppArmor mandatory access control framework was configured on the Ubuntu Server. AppArmor status was first checked with the aa-status command, confirming that the AppArmor kernel module was loaded and active.
Figure 14: AppArmor profile status
(Source: ubuntu Desktop)
One application profile was established and switched to active enforcement mode [20]. Specifically, the Transmission application profile at /etc/apparmor.d/transmission was placed into enforce mode with aa-enforce. This confined the application to predefined permissions, preventing unauthorised access to system resources.
Figure 15: AppArmor profile status and enforcement mode
(Source: ubuntu Desktop)
Enforcement of the profile was subsequently verified with aa-status. This demonstrated how mandatory access control can be used to shrink the attack surface and limit the consequences of a compromised application.
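The enforcement steps described above reduce to a short command sequence (assuming the apparmor-utils package is installed):

```shell
# Confirm the AppArmor module is loaded and list profile states.
sudo aa-status

# Place the Transmission profile into enforce mode.
sudo aa-enforce /etc/apparmor.d/transmission

# Re-check: the profile should now be listed under "enforce mode".
sudo aa-status | grep -i transmission
```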
To keep the system protected against newly disclosed vulnerabilities, automatic security updates were configured with the unattended-upgrades package. The service was installed and enabled, so the server automatically downloads and applies important security patches without human intervention [21].
Figure 16: Automatic security updates service status
(Source: ubuntu Desktop)
Configuration was verified by inspecting the file /etc/apt/apt.conf.d/20auto-upgrades, checking that package list updates and unattended upgrades were both activated. systemctl status unattended-upgrades was also used to confirm the service's operational state, showing it active and running.
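A sketch of the configuration and verification, assuming stock Ubuntu file locations:

```shell
# Install and enable automatic security updates.
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # writes 20auto-upgrades

# Verify both flags are enabled; expected contents:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
cat /etc/apt/apt.conf.d/20auto-upgrades

# Confirm the service is active and running.
systemctl status unattended-upgrades --no-pager
```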
Fail2ban was installed and configured to counter brute-force and repeated unauthorised access attempts. A local configuration file (jail.local) was created to protect the SSH service [22]. The SSH jail limited the number of retries and applied temporary bans to block IP addresses exhibiting suspicious authentication behaviour.
Figure 17: Fail2ban SSH protection and jail status
(Source: ubuntu Desktop)
The fail2ban service was restarted to apply the configuration, and its operational status checked with fail2ban-client status and fail2ban-client status sshd. This confirmed that the SSH jail was running and monitoring authentication logs as expected.
Figure 18: fail2ban-client status
(Source: ubuntu Desktop)
This implementation improves the server's defensive capabilities by providing automatic, real-time responses to intrusion attempts.
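A minimal jail.local sketch for the SSH jail; the retry and ban values are illustrative, not the exact figures used in the project:

```shell
# Create /etc/fail2ban/jail.local with an SSH jail.
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled  = true
port     = ssh
maxretry = 3
findtime = 10m
bantime  = 1h
EOF

# Apply the configuration and confirm the jail is active.
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
```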
To summarise and verify the security configurations implemented during Phases 4 and 5, a security baseline verification script (security-baseline.sh) was written. The script is executed on the server over SSH and automatically reports on key security controls: user privileges, SSH hardening settings, firewall status, intrusion detection, and automatic update services [23].
Figure 19: Security baseline verification script execution
(Source: ubuntu Desktop)
The script was made executable and ran successfully, producing clear output that validates the system's security posture.
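The project's script is not reproduced here; the following is a minimal sketch of what such a baseline check might contain, assuming stock Ubuntu paths and service names:

```shell
#!/usr/bin/env bash
# security-baseline.sh (sketch): report on key security controls.

report=""
add() { report="${report}$(printf '%-26s %s' "$1:" "$2")"$'\n'; }

# SSH hardening: read the relevant directives if the file is accessible.
if [ -r /etc/ssh/sshd_config ]; then
    add "PermitRootLogin" "$(grep -Ei '^PermitRootLogin' /etc/ssh/sshd_config || echo unset)"
    add "PasswordAuthentication" "$(grep -Ei '^PasswordAuthentication' /etc/ssh/sshd_config || echo unset)"
else
    add "sshd_config" "not readable on this host"
fi

# Firewall, intrusion detection, and automatic updates: service states.
for svc in ufw fail2ban unattended-upgrades; do
    if command -v systemctl >/dev/null 2>&1; then
        state=$(systemctl is-active "$svc" 2>/dev/null)
        add "$svc" "${state:-unknown}"
    else
        add "$svc" "systemctl unavailable"
    fi
done

echo "=== Security Baseline Report ==="
printf '%s' "$report"
```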
A remote monitoring script (monitor-server.sh) was created and run on the workstation. The script connected securely to the server through key-based SSH authentication and gathered real-time performance indicators, including CPU load, memory consumption, disk usage, and network traffic [24].
Figure 20: Remote monitoring script collecting system metrics
(Source: ubuntu Desktop)
Running the monitoring script from the workstation proved effective for remote administration and real-time observation of server performance without a local connection. This solution mirrors industry monitoring practice and supports proactive system maintenance.
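A sketch of such a monitoring script; in the project each command is wrapped in an `ssh` call to the server, which is omitted here so the snippet collects the same metrics locally:

```shell
#!/usr/bin/env bash
# monitor-server.sh (sketch): gather point-in-time performance metrics.
# In the project: ssh adminuser@<server-ip> '<command>' for each section.

collect_metrics() {
    echo "=== Server metrics: $(date -u '+%Y-%m-%d %H:%M:%S') UTC ==="
    echo "--- Load averages ---"
    uptime
    echo "--- Memory ---"
    free -h 2>/dev/null || echo "free not available"
    echo "--- Disk (root filesystem) ---"
    df -h /
    echo "--- Network interfaces ---"
    ip -s link 2>/dev/null || echo "ip not available"
}

collect_metrics
```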
The enhanced security and monitoring systems introduced in Week 5 considerably strengthened the robustness and manageability of the server environment. Mandatory access control, automated patching, and intrusion detection reduced the risk of attack, while the automation scripts and remote monitoring increased operational efficiency.
Week 6 set out to measure operating system performance under different workloads and to observe how system resources behave under stress. This step covered baseline measurement, controlled load testing, performance bottleneck detection, and targeted optimisations. Quantitative measures of CPU, memory, disk I/O, and network performance were obtained and then re-tested to determine the efficacy of the implemented improvements.
Baseline performance testing was carried out to establish a point of reference for how the system performs when idle. The server was monitored with standard Linux tools such as top, free -h, uptime, and iostat. These tests were run with no active workload on the system.
Figure 21: Baseline CPU and memory utilisation under idle conditions
(Source: ubuntu Desktop)
The results showed the system was largely idle: CPU utilisation never exceeded 3 percent and load averages were near zero. Memory use was constant at roughly 360–390 MB of the 1.9 GB total RAM. There was no swap activity, confirming sufficient memory when idle [25]. Disk I/O was minimal, with iostat showing high idle percentages and zero read or write operations. Network activity was also low, indicating background traffic only.
CPU Stress Testing
CPU stress testing was performed with the command stress --cpu 2 --timeout 60. During execution, top showed CPU utilisation of about 98 percent with load averages around 0.85. This confirmed successful CPU saturation and demonstrated the system's behaviour under heavy computational load.
Figure 22: CPU stress test execution and utilisation impact
(Source: ubuntu Desktop)
Memory Pressure Testing
Memory stress testing used stress --vm 1 --vm-bytes 512M --timeout 60. Memory consumption rose to about 700 MB while swap consumption remained at 0 bytes. This showed that the system managed memory pressure efficiently without resorting to disk swapping.
Figure 23: Memory pressure testing using stress utility
(Source: ubuntu Desktop)
Disk I/O Testing
Disk performance was measured with the command dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct. The test recorded an average write throughput of about 492 MB/s. iostat output confirmed heavy disk utilisation during the test, returning to idle once it finished.
Figure 24: Disk I/O performance measurement using dd
(Source: ubuntu Desktop)
Together, these tests showed how resources are consumed under load and provided unambiguous data for identifying bottlenecks.
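The load-generation commands used across these tests, as executed on the server over SSH:

```shell
# CPU saturation: two workers spinning for 60 seconds.
stress --cpu 2 --timeout 60

# Memory pressure: one worker allocating 512 MB for 60 seconds.
stress --vm 1 --vm-bytes 512M --timeout 60

# Disk write throughput: 1 GiB written with the page cache bypassed.
dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
rm testfile   # remove the 1 GiB test file afterwards

# Observed remotely during each run with: top, free -h, iostat
```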
Network performance was measured in terms of latency and throughput. Latency was tested from the workstation to the server with ping -c 10. Round-trip times averaged around 0.3–0.4 milliseconds with no packet loss, indicating a stable, low-latency local network connection.
Figure 25: Network latency measurement using ping
(Source: ubuntu Desktop)
Throughput testing was performed with iperf3, with the server in listening mode and the workstation acting as the client. The tests consistently delivered throughput of 2.2 to 2.5 Gbps over a 30-second period. Bitrate variations were minor, as expected with continuous transmission, and performance did not change dramatically.
Figure 26: Network throughput measurement using iperf3
(Source: ubuntu Desktop)
These findings confirmed that the network link between the two systems supports high data-transfer rates and was not a bottleneck in the performance testing.
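The latency and throughput measurements used commands of the following form; 192.168.56.10 stands in for the server's host-only address:

```shell
# Latency: ten ICMP round trips from the workstation to the server.
ping -c 10 192.168.56.10

# Throughput: server in listening mode, workstation as client for 30 s.
iperf3 -s                      # on the server
iperf3 -c 192.168.56.10 -t 30  # on the workstation
```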
Analysis of the collected data revealed CPU utilisation as the major bottleneck under load. During the CPU load test, utilisation peaked at capacity while memory, disk, and network resources stayed within reasonable limits. This implies that compute-bound workloads would hit performance limits before any other subsystem.
Disk I/O showed short-lived bursts during write operations but returned to idle quickly, indicating no long-term storage bottleneck.
Two optimisations were applied based on the observed results. First, Nginx gzip compression was enabled by editing the /etc/nginx/nginx.conf file. This optimisation improves service efficiency because response payloads are compressed, reducing the CPU and network overhead of web traffic. Successful deployment was confirmed with curl -I, which also verified continued availability of the service.
Figure 27: Nginx Gzip Compression (Service-Level) Performance improvements
(Source: Ubuntu Desktop)
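A sketch of the gzip change and its verification. The directive values shown in the comments are illustrative defaults, not necessarily the exact values used, and 192.0.2.10 stands in for the server's address.

```shell
# Directives added to the http block of /etc/nginx/nginx.conf
# (values here are illustrative):
#   gzip on;
#   gzip_comp_level 5;
#   gzip_types text/plain text/css application/json application/javascript;

# Validate the configuration, then reload without dropping connections.
sudo nginx -t && sudo systemctl reload nginx

# Verify compression: request with gzip accepted and check for the
# "Content-Encoding: gzip" response header.
curl -s -I -H "Accept-Encoding: gzip" http://192.0.2.10/ | grep -i content-encoding
```

Note that a plain curl -I without the Accept-Encoding header may not show the gzip header, since the server only compresses when the client advertises support.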
Second, the kernel parameter vm.swappiness was lowered to 10 using sysctl. This change prioritises the use of RAM over swap, which suits the system's known memory capacity and avoids unnecessary disk access under moderate load.
Figure 28: Memory Management Tuning (OS-Level) Performance improvements
(Source: Ubuntu Desktop)
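The swappiness change can be sketched as follows; the drop-in filename 99-swappiness.conf is an illustrative choice.

```shell
# Inspect the current value (the Ubuntu default is 60).
sysctl vm.swappiness

# Apply the new value to the running kernel immediately.
sudo sysctl -w vm.swappiness=10

# Persist the setting across reboots via a sysctl drop-in file.
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl --system
```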
Re-testing after optimisation showed memory usage stabilising at lower levels, with improved free and available memory figures. Swap usage remained at zero and system responsiveness was maintained under load.
Metric             | Baseline | Under Load    | After Optimisation
CPU Utilisation    | ~2%      | ~98%          | ~95–98%
Load Average       | 0.00     | 0.85          | 0.70
Memory Used        | ~380 MB  | ~700 MB       | ~390 MB
Swap Usage         | 0 B      | 0 B           | 0 B
Disk Write Speed   | N/A      | ~492 MB/s     | ~492 MB/s
Network Latency    | ~0.3 ms  | ~0.4 ms       | ~0.3 ms
Network Throughput | N/A      | ~2.2–2.5 Gbps | ~2.3–2.5 Gbps
The following charts were created to support the analysis:
Comparison of CPU utilisation (idle vs stressed)
Memory consumption before and after optimisation
Network throughput over time, measured with iperf3
Figure 29: CPU utilisation under baseline, stress load, and post-optimisation conditions
(Source: self-created)
Figure 30: Memory usage comparison before load, during memory stress, and after optimisation
(Source: self-created)
Figure 31: Network throughput measured using iperf3 during load testing and post-optimisation
(Source: self-created)
These visualisations complement the raw metrics and illustrate the system's behaviour under varying workloads.
Week 7 focused on a thorough security audit of the configured Linux server to evaluate its overall security posture. The tasks of this phase were to run an infrastructure-level assessment with industry-standard tools, verify access controls, review the services exposed on the network, and assess system hardening.
The system security audit was performed with Lynis version 3.0.9, a widely used auditing and hardening tool for Unix-based systems. A baseline scan in standard audit mode assessed the system's security settings, hardening level, and adherence to best practices. The baseline audit ran 265 individual tests and reported a hardening index of 64. The firewall was identified as operational, while areas such as PAM session hardening and process accounting were flagged for improvement.
Figure 32: Baseline Lynis security audit results
(Source: Ubuntu Desktop)
Remediation actions targeting the Lynis recommendations were then applied. The libpam-tmpdir package was installed to ensure secure handling of temporary directories within PAM sessions, reducing the risk of information leakage. In addition, the acct package was installed and enabled to provide process accounting, supporting better monitoring and forensic analysis of executed commands. These actions responded directly to Lynis recommendations and improved the system's auditing and session controls.
Figure 33: Lynis audit after remediation and hardening
(Source: Ubuntu Desktop)
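The audit and remediation steps above can be sketched with the following commands, assuming the standard Ubuntu package and service names for process accounting.

```shell
# Baseline audit (run as root for full test coverage);
# the report ends with the hardening index.
sudo lynis audit system

# Remediation guided by the Lynis findings: secure per-session
# temporary directories under PAM, and enable process accounting.
sudo apt install -y libpam-tmpdir acct
sudo systemctl enable --now acct

# Re-run the audit and compare the new hardening index.
sudo lynis audit system
```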
After remediation, a second Lynis audit was run to confirm the effectiveness of the changes. The post-remediation scan reported an improved hardening index of 65, demonstrating a measurable improvement in the system's security posture. Although some recommendations remained, such as the optional installation of a malware scanner, the overall result indicated a well-hardened system fit for its intended purpose.
Nmap was used to perform a network-level security scan to enumerate exposed services and assess the server's external attack surface. A SYN scan with service detection was performed, followed by a full TCP port scan. The results showed only three open ports: 22/tcp (SSH), 80/tcp (HTTP), and 5201/tcp (iperf3). All other ports were confirmed closed, minimising the system's exposure to unsolicited network traffic.
Figure 34: Nmap network scan showing exposed services
(Source: Ubuntu Desktop)
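The scans can be reproduced as follows; 192.0.2.10 is a placeholder for the server's address, and a SYN scan (-sS) requires root privileges.

```shell
# SYN scan with service/version detection on Nmap's default port set.
sudo nmap -sS -sV 192.0.2.10

# Full TCP port scan across all 65535 ports to confirm nothing
# is listening outside the expected set (22, 80, 5201).
sudo nmap -sS -p- 192.0.2.10
```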
Each open port was reviewed against operational needs: SSH was required for secure remote administration, HTTP supported the Nginx web service, and port 5201 was temporarily opened for controlled network performance testing with iperf3 during Week 6. No unnecessary or unexpected services were identified.
The system's running services were audited with systemd to enumerate all active services and assess their necessity. Core services such as ssh.service, nginx.service, rsyslog.service, and systemd-journald.service were operational, each with a justifiable role in administration, web hosting, or logging. Security-oriented services such as fail2ban.service and unattended-upgrades.service were also running, providing automated intrusion prevention and patch management.
Figure 35: Service Audit
(Source: Ubuntu Desktop)
Non-permanent components tied to performance testing, such as iperf3.service, were identified as temporary services. Network and device management services, including systemd-networkd and udisks2, were also required for stable operation. No redundant or high-risk services were found.
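The service audit described above can be carried out with standard systemctl queries, for example:

```shell
# List all currently running services for review.
systemctl list-units --type=service --state=running

# Check whether a given service is enabled at boot, e.g. nginx.
systemctl is-enabled nginx.service

# Show a concise status summary for a security-relevant service.
systemctl status fail2ban.service --no-pager
```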
Although a high level of security was achieved, some residual risks and constraints remain. A dedicated malware scanner was not installed, an improvement suggested by Lynis. This was considered acceptable because the system runs in a controlled laboratory setting with limited access and minimal external exposure.
The web service is also served over HTTP rather than HTTPS; this is tolerable on an internal network but would require TLS to be implemented in a production environment.
One of the key learning outcomes of this project was the development of strong command-line interface (CLI) skills. System configuration, monitoring, performance testing, and security auditing were carried out almost entirely with terminal-based tools such as top, iostat, ss, nmap, lynis, and stress. Frequent use of these commands built confidence in navigating Linux systems efficiently, interpreting command output correctly, and troubleshooting port conflicts, service failures, and permission errors without resorting to graphical interfaces.
The project also deepened understanding of operating system behaviour under different conditions. Systematic baseline and load testing made the relationships between CPU utilisation, memory pressure, disk I/O wait time, and network throughput visible. Observing real-time metrics during stress testing showed how the Linux scheduler manages resource allocation and how tuning parameters such as vm.swappiness alter memory behaviour. This practical exposure bridged the gap between theoretical OS concepts and observed system behaviour.
Systematic hardening and auditing also strengthened security awareness. Implementing AppArmor enforcement, Fail2Ban, firewall rules, and automatic updates, then auditing with Lynis and Nmap, provided practical insight into layered defence mechanisms. Rather than treating security as a one-time setup task, the project made clear that continuous monitoring, evaluation, and incremental refinement are necessary to maintain a secure operating environment.
One of the most valuable lessons was the trade-off between security hardening and system performance. Security controls such as AppArmor profiles, Fail2Ban monitoring, and extensive logging introduce additional processing overhead. Although negligible in normal operation, this overhead became noticeable under high load, when CPU and memory resources were stretched. This underlined the need to balance protection mechanisms against performance requirements, particularly where resources are limited.
Monitoring and auditing tools present a similar trade-off. Continuous logging, intrusion detection, and automated updates consume background resources and can slightly affect response times during peak workloads. Performance testing showed, however, that these effects were acceptable relative to the security benefits gained. The system remained stable even under combined CPU, memory, disk, and network load, indicating that the chosen security controls were appropriately scaled.
As the optimisation activities demonstrated, some of these effects can be mitigated through performance tuning. The changes that improved memory efficiency and network performance, namely reducing vm.swappiness and enabling gzip compression in Nginx, did not undermine any security controls.
The skills and practices gained through this project map closely onto industry requirements for system administrators, DevOps engineers, and cybersecurity professionals. Configuring, monitoring, auditing, and optimising Linux systems entirely through command-line tools reflects real-world operational settings where automation, scripting, and remote management are the norm. Working with tools such as Lynis, Nmap, Fail2Ban, and performance benchmarking utilities mirrors common workflows in enterprise and cloud infrastructure.
In addition, the systematic, step-by-step approach adopted during the project mirrors industry lifecycle models of baseline assessment, implementation, validation, optimisation, and review. The emphasis on evidence-based decision-making, quantitative performance analysis, and documented risk assessment reflects professional-level system evaluation practice.